
    CRDTs: Consistency without concurrency control

    A CRDT is a data type whose operations commute when they are concurrent. Replicas of a CRDT eventually converge without any complex concurrency control. As an existence proof, we exhibit a non-trivial CRDT: a shared edit buffer called Treedoc. We outline the design, implementation and performance of Treedoc. We discuss how the CRDT concept can be generalised, and its limitations.
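The defining property above is that concurrent operations commute, so replicas converge no matter in which order updates are merged. A minimal sketch of that idea is a grow-only counter (a G-Counter), one of the simplest CRDTs; this is not Treedoc itself, just an illustration of the commuting-merge principle:

```python
class GCounter:
    """Grow-only counter CRDT: one count slot per replica."""

    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        # Each replica only ever increments its own slot.
        self.counts[self.replica_id] += 1

    def merge(self, other):
        # Element-wise max: commutative, associative, idempotent,
        # so merge order does not matter.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two replicas update concurrently, then exchange state.
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()
b.increment()
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 3  # replicas converged
```

Because merge is a join (element-wise max) on monotonically growing state, no coordination between replicas is ever needed.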

    Composing Relaxed Transactions

    As the classical transactional abstraction is sometimes considered too restrictive in leveraging parallelism, a lot of work has been devoted to devising relaxed transactional models with the goal of improving concurrency. Nevertheless, the quest for improved concurrency has somehow led to the neglect of one of the most appealing aspects of transactions: software composition, namely, the ability to develop pieces of software independently and compose them into applications that behave correctly in the face of concurrency. Indeed, a closer look at relaxed transactional models reveals that they do jeopardize composition, raising the fundamental question of whether it is at all possible to devise such models while preserving composition. This paper shows that the answer is positive. We present outheritance, a necessary and sufficient condition for a (potentially relaxed) transactional memory to support composition. Basically, outheritance requires child transactions to pass their conflict information to their parent transaction, which in turn maintains this information until commit time. Concrete instantiations of this idea have been used before, classical transactions being the most prevalent example, but we believe we are the first to capture this as a general principle, as well as to prove that it is, strictly speaking, equivalent to ensuring composition. We illustrate the benefits of outheritance using elastic transactions and show how they can satisfy outheritance and provide composition without hampering concurrency. We leverage this to present a new (transactional) Java package, a composable alternative to the concurrency package of the JDK, and evaluate efficiency through an implementation that speeds up state-of-the-art software transactional memory implementations (TL2, LSA, SwissTM) by almost a factor of 3.
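The core mechanism is that a committing child transaction hands its conflict information up to its parent, which keeps it until its own commit. A minimal sketch of that flow, with illustrative class and method names that are assumptions rather than the paper's actual API (which targets Java STM implementations), might look like:

```python
class Transaction:
    """Toy nested transaction tracking only its accessed keys
    as a stand-in for real conflict information."""

    def __init__(self, parent=None):
        self.parent = parent
        self.access_set = set()  # keys read or written

    def access(self, key):
        self.access_set.add(key)

    def commit(self):
        if self.parent is not None:
            # Outheritance: on commit, the child's conflict info is
            # inherited outward by the parent instead of being discarded.
            self.parent.access_set |= self.access_set

parent = Transaction()
child = Transaction(parent=parent)
child.access("x"); child.access("y")
child.commit()
parent.access("z")
# Until the parent commits, it remains answerable for the
# accesses of its already-committed child.
assert parent.access_set == {"x", "y", "z"}
```

The point of retaining the child's conflict set is that a conflict with the child's accesses, detected after the child has committed but before the parent has, can still abort the composed transaction as a whole, which is what makes independently written pieces compose correctly.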

    Inferring Scalability from Program Pseudocode

    Recent trends have led hardware manufacturers to place multiple processing cores on a single chip, making parallel programming the intended way of taking advantage of the increased processing power. However, bringing concurrency to average programmers is considered to be one of the major challenges in computer science today. The difficulty lies not only in writing correct parallel programs, but also in achieving the required efficiency and performance. For parallel programs, performance is not only about obtaining low execution times on a fixed number of cores, but also about maintaining efficiency as the number of available cores is increased. Ideally, programmers should have in their toolkit techniques that can be used when designing parallel programs, before any code is available for testing on production hardware, and which are then able to predict scalability once the program is implemented. Existing methods are either unreliable at predicting scalability, as in the case of disjoint-access parallelism, or do not apply to lock-based programs, which currently make up a large part of existing concurrent programs. Furthermore, using some of these techniques is so complicated that it outweighs the time required to implement, debug and test the program on real hardware. In this thesis we study the problem of predicting the scalability of concurrent programs without implementing them. This allows programmers in the design phase of a concurrent algorithm to choose only one or a few promising solutions that will be implemented, debugged and tested on production hardware. We first consider disjoint-access parallelism, an existing property that applies only to a very restricted class of programs. After an extensive practical evaluation spanning a variety of scenarios, we find it to be ineffective at predicting scalability. For predicting the scalability of more general concurrent algorithms, we propose the obstruction degree, a new scalability metric based on the consistency requirements of algorithms. It applies to programs using locks, invalidation primitives and transactional memory. Our metric allows programmers to compare two given algorithms as well as predict their scalability limit, the maximum number of processors to which they can scale, thus allowing programmers to choose appropriately sized hardware for running their programs. We also examine the composition of relaxed memory transactions in order to combine the ease of programming offered by transactional memory with the increased scalability of transactions that circumvent the traditional transactional model. We present outheritance, a property we show to be both necessary and sufficient for ensuring the correct composition of relaxed transactions, and we show how to calculate the obstruction degree of compositions that use this new property. We use outheritance to build OE-STM, a new software transactional memory algorithm with elastic transactions that compose correctly.

    Obstruction degree: measuring concurrency in shared memory systems

    When choosing a priori one of several algorithms for implementing an abstraction, programmers turn to complexity theory to predict which solution would perform best. In the sequential world, this approach usually measures the number of steps that the algorithm needs to perform. As the world turns toward multicore computers, the number of steps required to execute an operation is no longer the only concern; instead, algorithms should perform better when running on more cores, i.e., scale well. Because of this, algorithms that perform more steps but in which processes interfere with each other less are sometimes preferred. Most existing complexity measures for concurrent programs do not encompass locks, although a large part of today's programs are built this way, making them unable to compare a program using locks to an alternative that avoids them. Furthermore, they consider the worst case that the algorithm can encounter while executing, thus failing to give predictions for the common case, the one that typically arises during execution. This paper presents the notion of obstruction degree, a metric for predicting the scalability of concurrent algorithms before implementing them. We illustrate its simplicity and wide application range using various lock-based and lock-free algorithms, which we then split into equivalence classes. We convey the accuracy of the scalability predictions made using the obstruction degree through experimental measurements.
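The abstract does not spell out how the obstruction degree is computed, so the following is only a toy illustration of the kind of interference such a metric captures, not the paper's actual definition: given a candidate design on paper, count how many pairs of concurrent operations would contend for the same lock, before writing any real implementation.

```python
import itertools

def conflicting_pairs(ops, lock_of):
    """Count pairs of operations that acquire the same lock.

    lock_of maps an operation to the lock it would take; no real
    locks or threads are involved -- this is pencil-and-paper analysis.
    """
    return sum(1 for a, b in itertools.combinations(ops, 2)
               if lock_of(a) == lock_of(b))

ops = list(range(8))  # eight concurrent increments of a shared counter

# Design A: one global lock -- every pair of operations conflicts.
global_lock = conflicting_pairs(ops, lambda op: 0)

# Design B: counter striped over 4 locks -- only same-stripe pairs conflict.
striped = conflicting_pairs(ops, lambda op: op % 4)

assert global_lock == 28  # C(8, 2): all pairs contend
assert striped == 4       # one contending pair per stripe
```

Design B performs the same number of steps per operation but interferes far less, which is exactly the trade-off the abstract argues step-counting complexity measures fail to capture.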

    Consistency without concurrency control in large, dynamic systems

    Replicas of a commutative replicated data type (CRDT) eventually converge without any complex concurrency control. We validate the design of a non-trivial CRDT, a replicated sequence, with performance measurements in the context of Wikipedia. Furthermore, we discuss how to eliminate a remaining scalability bottleneck: whereas garbage collection previously required a system-wide consensus, here we propose a flexible two-tier architecture and a protocol for migrating between tiers. We also discuss how the CRDT concept can be generalised, and its limitations.
